Superintelligence Does Not Imply Benevolence

Authors

  • Joshua Fox
  • Carl Shulman
Abstract

As machines become capable of more autonomous and intelligent behavior, will they also display more morally desirable behavior? Earth's history tends to suggest that increasing intelligence, knowledge, and rationality result in more cooperative and benevolent behavior. Animals with sophisticated nervous systems track and punish exploitative behavior while rewarding cooperation. Humans form complex norms and social groups of remarkable scale compared to other animals. Even within the human experience, the accumulation of knowledge over time has been associated with reduced rates of violence (Pinker 2007) and increases in the scope of cooperation (Wright 2001), from band to tribe to city-state to nation to transnational organization. One might generalize from this trend and argue that as machines approach and exceed human cognitive capacities, moral behavior will improve in tandem.

We argue that this picture neglects a critical distinction between two conceptions of morality, and a related distinction between routes from increased intelligence to more moral behavior. One conception frames morality as a system for cooperation between entities with diverse aims and the ability to affect one another's pursuit of those aims. Practices such as reciprocal altruism (Trivers 1971) help partners increase their respective reproductive fitnesses. In the cooperative conception, the reason to perform moral behaviors, or to dispose oneself to do so (Gauthier 1986), is to advance one's own ends. Another, axiological, conception holds that morality demands revision of our ultimate ends. This conception is especially important for the treatment of the helpless, e.g., nonhuman animals. Cooperative moral theories, e.g., Gauthier (1986), can often derive moral status for the helpless only from cooperation with altruistic powerful agents.

We can then evaluate alternative paths from intelligence to moral behavior. First, machines with greater instrumental rationality could better devise and implement cooperative practices. Thus Hall (2007) argues that intelligent machines will out-cooperate humans, at least with powerful peers. Second, on a Kantian view of morality, one might think that as intelligent machines expanded their knowledge and capacities, they would be directly motivated to revise their preferences to be more moral (Chalmers 2010).

We consider a particular counterexample to the Kantian view. Using a definition of intelligence as the ability to achieve goals in a wide range of environments (Legg 2008), we discuss the AIXI formalism, which combines Solomonoff induction with Bayesian decision theory to optimize for unknown reward functions (Hutter 2005). AIXI, although physically unrealizable, is a compactly specified superintelligence, provably optimal in maximizing toward arbitrary goals, but it has "no room" for the Kantian revision. Instead, it would preserve arbitrary values in most situations (Omohundro 2008). Thus we have reason to think that diverse intelligent machines would convergently display a "drive" to cooperate with sufficiently powerful partners for instrumental reasons, even if this was not specifically engineered. Yet we have reason for pessimism about the ultimate ends of intelligent machines not carefully engineered to be altruistic, and so should work to avoid situations in which such systems are very powerful relative to humanity (Yudkowsky 2008).

Fox, Joshua, and Carl Shulman. 2010. "Superintelligence Does Not Imply Benevolence." In ECAP10: VIII European Conference on Computing and Philosophy, edited by Klaus Mainzer. Munich: Verlag Dr. Hut. This version contains minor changes.
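For readers unfamiliar with the AIXI formalism mentioned above, one common presentation of its action selection (following Hutter 2005) is sketched below; notation details vary across presentations.

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here $U$ is a universal monotone Turing machine, $q$ ranges over programs (environment hypotheses) weighted by their Solomonoff prior $2^{-\ell(q)}$ with $\ell(q)$ the program length, $a_i$, $o_i$, and $r_i$ are actions, observations, and rewards, and $m$ is the horizon. The reward signal is supplied externally and maximized as given, which is why the formalism leaves "no room" for Kantian revision of ends.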
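The abstract's contrast between cooperation with powerful peers and treatment of the helpless can be illustrated with a toy repeated-game calculation. The payoff matrix and retaliation model below are illustrative assumptions of ours, not drawn from Fox and Shulman: a purely self-interested agent plays an iterated prisoner's dilemma and cooperates only when its partner can punish defection.

```python
# Illustrative sketch: instrumental cooperation in an iterated
# prisoner's dilemma. Payoffs are the standard (T=5, R=3, P=1, S=0)
# values; moves are "C" (cooperate) and "D" (defect).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_policy(partner_can_retaliate: bool, rounds: int) -> str:
    """Pick 'C' or 'D' to maximize the agent's own total payoff.

    A "powerful" partner plays a grim strategy: it cooperates until we
    defect, then defects forever. A "helpless" partner always
    cooperates no matter what we do.
    """
    def total(my_move: str) -> int:
        score, partner_move = 0, "C"
        for _ in range(rounds):
            score += PAYOFF[(my_move, partner_move)]
            if partner_can_retaliate and my_move == "D":
                partner_move = "D"  # punishment from then on
        return score

    return max(["C", "D"], key=total)

# Against a powerful peer, sustained mutual cooperation (3 per round)
# beats a one-off exploitation payoff followed by punishment; against
# the helpless, exploitation strictly dominates.
print(best_policy(partner_can_retaliate=True, rounds=10))   # C
print(best_policy(partner_can_retaliate=False, rounds=10))  # D
```

This is the paper's point in miniature: instrumental rationality alone yields cooperation with those who can retaliate, but gives the helpless no moral status.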


